This paper presents a machine learning approach to modeling the electric consumption of electric vehicles at the macroscopic level, i.e., in the absence of a speed profile, while preserving microscopic-level accuracy. For this work, we leveraged an agent-based transportation tool to model trips occurring in the Greater Chicago region under various scenario variations, together with a physics-based modeling and simulation tool providing high-fidelity energy consumption values. The generated results constitute a very large dataset of vehicle-route energy outcomes that captures the variability of vehicle and routing settings while masking the high-fidelity time series of vehicle speed dynamics. We show that, although all the internal dynamics that affect energy consumption are masked, aggregate-level energy consumption values can still be learned accurately with a deep learning approach. When large-scale data is available and the features are carefully engineered, a well-designed model can overcome this masking and recover the latent information. The model has been deployed and integrated within the POLARIS transportation system simulation tool to support real-time behavioral transportation models for individual charging decisions and the rerouting of electric vehicles.
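As a rough illustration of the kind of aggregate-level model the abstract describes, the sketch below fits a small neural regressor on trip-level features with no speed profile; the feature names, the synthetic data, and the network size are placeholders chosen for illustration, not the deployed POLARIS model.

```python
# Minimal sketch: aggregate-level EV energy regression without speed profiles.
# Feature names and data are illustrative placeholders, not the POLARIS pipeline.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 5000
# Hypothetical trip-level features: distance (km), mean link speed (km/h),
# number of stops, road-grade proxy, vehicle mass (kg), ambient temperature (C).
X = np.column_stack([
    rng.uniform(1, 80, n),          # distance
    rng.uniform(15, 110, n),        # mean speed
    rng.integers(0, 30, n),         # stops
    rng.normal(0, 2, n),            # grade proxy
    rng.uniform(1400, 2300, n),     # mass
    rng.uniform(-10, 35, n),        # temperature
])
# Synthetic target standing in for high-fidelity simulated energy (kWh).
y = 0.15 * X[:, 0] + 0.02 * X[:, 2] + 1e-6 * X[:, 4] * X[:, 0] + rng.normal(0, 0.3, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0))
model.fit(X[:4000], y[:4000])
print("R^2 on held-out trips:", model.score(X[4000:], y[4000:]))
```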
3D scene graphs (3DSGs) are an emerging description that unifies symbolic, topological, and metric scene representations. However, typical 3DSGs contain hundreds of objects and symbols even for small environments, making task planning on the full graph impractical. We construct TASKOGRAPHY, the first large-scale robotic task planning benchmark over 3DSGs. While most benchmarking efforts in this area focus on vision-based planning, we systematically study symbolic planning to decouple planning performance from visual representation learning. We observe that, among existing methods, neither classical nor learning-based planners can plan in real time over full 3DSGs. Enabling real-time planning demands progress on (a) sparsifying 3DSGs for tractable planning and (b) designing planners that better exploit 3DSG hierarchies. Towards the former goal, we propose SCRUB, a task-conditioned 3DSG sparsification method that enables classical planners to match, and in some cases surpass, state-of-the-art learning-based planners. Towards the latter goal, we propose SEEK, a procedure that enables learning-based planners to exploit 3DSG structure, reducing the number of replanning queries required by current best approaches by an order of magnitude. We will open-source all code and baselines to spur further research at the intersection of robot task planning, learning, and 3DSGs.
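For intuition only, here is a toy sketch of task-conditioned scene-graph sparsification in the spirit of SCRUB; the dictionary-based graph and the simple keep-rooms-with-goal-objects rule are a deliberate simplification, not the published algorithm.

```python
# Toy sketch of task-conditioned 3D scene graph sparsification (SCRUB-like idea).
# The graph structure and the pruning rule are illustrative simplifications.

# Hierarchical scene graph: rooms -> objects.
scene_graph = {
    "kitchen":  ["mug", "kettle", "fridge", "sponge"],
    "bedroom":  ["pillow", "lamp", "book"],
    "bathroom": ["towel", "soap"],
}

def sparsify(graph, task_objects):
    """Keep only rooms containing task-relevant objects, and within those
    rooms keep only the task-relevant objects themselves."""
    pruned = {}
    for room, objects in graph.items():
        relevant = [o for o in objects if o in task_objects]
        if relevant:
            pruned[room] = relevant
    return pruned

# Task: "bring the mug and the book to the kitchen".
goal = {"mug", "book"}
print(sparsify(scene_graph, goal))
# {'kitchen': ['mug'], 'bedroom': ['book']}
```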
Differentiable simulation is a promising toolkit for fast gradient-based policy optimization and system identification. However, existing approaches to differentiable simulation have largely tackled scenarios in which obtaining smooth gradients is relatively easy, such as systems with smooth dynamics. In this work, we study the challenges differentiable simulation faces when a single descent is unlikely to reach the global optimum, which is often the case in contact-rich scenarios. We analyze the optimization landscapes of diverse scenarios containing both rigid bodies and deformable objects. In dynamic environments with highly deformable objects and fluids, differentiable simulators produce rugged landscapes that nonetheless have useful gradients in some parts of the space. We propose a method that combines Bayesian optimization with semi-local 'leaps' to obtain a global search method that can use gradients effectively while maintaining robust performance in regions with noisy gradients. We show that our approach outperforms several gradient-based and gradient-free baselines on an extensive set of experiments in simulation, and we also validate the method with experiments on a real robot and deformables. Videos and supplementary materials are available at https://tinyurl.com/globdiff
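As a hedged illustration of the global-plus-local search pattern, the numpy-only sketch below interleaves random global proposals ('leaps') with short gradient descents on a rugged 1-D objective; it deliberately omits the paper's Bayesian-optimization machinery and differentiable simulator.

```python
# Caricature of combining global proposals with short local gradient descents
# on a rugged objective. Not the paper's method; just the search pattern.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Rugged landscape: smooth bowl plus high-frequency ripples.
    return (x - 2.0) ** 2 + 0.5 * np.sin(8.0 * x)

def grad_f(x, eps=1e-4):
    # Finite-difference stand-in for a (possibly noisy) simulator gradient.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

best_x, best_val = None, np.inf
for leap in range(20):                 # global "leaps": fresh random proposals
    x = rng.uniform(-5.0, 8.0)
    for _ in range(30):                # short semi-local gradient descent
        x -= 0.05 * grad_f(x)
    val = f(x)
    if val < best_val:
        best_x, best_val = x, val

print(f"best x = {best_x:.3f}, f(x) = {best_val:.3f}")
```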
In this paper we explore the task of modeling (semi) structured object sequences; in particular we focus our attention on the problem of developing a structure-aware input representation for such sequences. In such sequences, we assume that each structured object is represented by a set of key-value pairs which encode the attributes of the structured object. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key, over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this iterative two-part training results in better performance than a unified network with hierarchical encoding, as well as other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments using real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks, along with detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening the sequence of objects and also allows us to operate on significantly larger sequences than existing methods.
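A compact PyTorch sketch of the two-stage idea follows: a shared Transformer encoder over each key's value sequence (Temporal Value Modeling), then self-attention across the resulting key embeddings (Key Aggregation). The dimensions, vocabulary, and mean pooling are assumptions for illustration, not the paper's exact architecture or training schedule.

```python
# Sketch of Temporal Value Modeling (TVM) + Key Aggregation (KA) for
# (semi)structured object sequences. Sizes and pooling are illustrative.
import torch
import torch.nn as nn

num_keys, vocab_size, d_model, seq_len, batch = 6, 1000, 64, 12, 4

value_embedding = nn.Embedding(vocab_size, d_model)
key_embedding = nn.Embedding(num_keys, d_model)

# TVM: one shared encoder over the value sequence of each key.
tvm_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
tvm = nn.TransformerEncoder(tvm_layer, num_layers=2)

# KA: self-attention over the per-key sequence representations.
ka_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
ka = nn.TransformerEncoder(ka_layer, num_layers=1)

# Toy input: value-token ids, shape (batch, num_keys, seq_len).
values = torch.randint(0, vocab_size, (batch, num_keys, seq_len))

# Encode each key's value sequence, then mean-pool over time.
x = value_embedding(values).view(batch * num_keys, seq_len, d_model)
per_key = tvm(x).mean(dim=1).view(batch, num_keys, d_model)

# Add key identity and aggregate across keys into one sequence representation.
per_key = per_key + key_embedding(torch.arange(num_keys))
object_sequence_repr = ka(per_key).mean(dim=1)   # (batch, d_model)
print(object_sequence_repr.shape)                # torch.Size([4, 64])
```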
Deep learning techniques with neural networks have been used effectively in computational fluid dynamics (CFD) to obtain solutions to nonlinear differential equations. This paper presents a physics-informed neural network (PINN) approach to solve the Blasius function. This method eliminates the need to convert the non-linear differential equation into an initial value problem. It also tackles the convergence issue arising in the conventional series solution. It is seen that this method produces results that are on par with the numerical and conventional methods. The solution is extended to the negative axis to show that PINNs capture the singularity of the function at $\eta=-5.69$.
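A minimal PINN sketch for the Blasius problem, $f''' + \tfrac{1}{2} f f'' = 0$ with $f(0)=0$, $f'(0)=0$, and $f'(\eta)\to 1$ as $\eta\to\infty$, is given below; the network width, truncated domain, and equal loss weighting are illustrative choices rather than the paper's configuration.

```python
# Minimal PINN sketch for the Blasius equation  f''' + 0.5 * f * f'' = 0
# with f(0)=0, f'(0)=0, f'(eta_max)=1 (truncated far-field condition).
# Network size, eta_max, and training length are illustrative choices.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
eta_max = 8.0

def derivatives(eta):
    eta = eta.requires_grad_(True)
    f = net(eta)
    f1 = torch.autograd.grad(f, eta, torch.ones_like(f), create_graph=True)[0]
    f2 = torch.autograd.grad(f1, eta, torch.ones_like(f1), create_graph=True)[0]
    f3 = torch.autograd.grad(f2, eta, torch.ones_like(f2), create_graph=True)[0]
    return f, f1, f2, f3

for step in range(3000):
    eta = torch.rand(128, 1) * eta_max          # collocation points in [0, eta_max]
    f, f1, f2, f3 = derivatives(eta)
    residual = f3 + 0.5 * f * f2                # Blasius ODE residual

    f0, f0p, _, _ = derivatives(torch.zeros(1, 1))
    _, finfp, _, _ = derivatives(torch.full((1, 1), eta_max))

    loss = ((residual ** 2).mean() + (f0 ** 2).mean()
            + (f0p ** 2).mean() + ((finfp - 1.0) ** 2).mean())
    opt.zero_grad()
    loss.backward()
    opt.step()

# Exact Blasius value is f''(0) ~= 0.332; this short sketch only approximates it.
print("f''(0) estimate:", derivatives(torch.zeros(1, 1))[2].item())
```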
The generalisation performance of a convolutional neural network (CNN) is largely determined by the quantity, quality, and diversity of its training images. All of the training data needs to be annotated in advance, yet in many real-world applications data is easy to acquire but expensive and time-consuming to label. The goal of active learning is to draw the most informative samples from the unlabeled pool, which can then be used for training after annotation. With a quite different objective, self-supervised learning (SSL) has been gaining meteoric popularity by closing the performance gap with supervised methods on large computer vision benchmarks. SSL approaches have been shown to produce low-level representations that are invariant to distortions of the input sample and can encode invariance to artificially created distortions, e.g. rotation, solarization, and cropping, while relying on simpler and more scalable training frameworks. In this paper, we unify these two families of approaches from the angle of active learning on a self-supervised manifold and propose Deep Active Learning using Barlow Twins (DALBT), an active learning method that combines a classifier trained jointly with the self-supervised Barlow Twins loss, so that the model encodes invariance to artificially created distortions such as rotation, solarization, and cropping.
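Since the self-supervised component is Barlow Twins, a short PyTorch sketch of its redundancy-reduction loss is shown below; the random stand-in embeddings, the normalization epsilon, and the lambda value are placeholders, and the coupling to the active-learning acquisition step is not shown.

```python
# Sketch of the Barlow Twins redundancy-reduction loss used as the
# self-supervised component; inputs and hyperparameters are placeholders.
import torch

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    # z_a, z_b: embeddings of two distorted views, shape (batch, dim).
    n, d = z_a.shape
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    c = (z_a.T @ z_b) / n                              # cross-correlation matrix (d, d)
    on_diag = ((torch.diagonal(c) - 1) ** 2).sum()     # push diagonal toward 1
    off_diag = (c ** 2).sum() - (torch.diagonal(c) ** 2).sum()  # decorrelate the rest
    return on_diag + lam * off_diag

# Toy usage with random "embeddings" of two augmented views.
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
print(barlow_twins_loss(z1, z2).item())
```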
Generative AI has matured to a point where large-scale models can generate text that seems indistinguishable from human-written text and remarkably photorealistic images. Automatically measuring how close the distribution of generated data is to the target real data distribution is a key step in diagnosing existing models and developing better models. We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images. These scores are statistical summaries of divergence frontiers capturing two types of errors in generative modeling. We explore four approaches to statistically estimate these scores: vector quantization, non-parametric estimation, classifier-based estimation, and parametric Gaussian approximations. We provide statistical bounds for the vector quantization approach. Empirically, we find that the proposed scores paired with a range of $f$-divergences and statistical estimation methods can quantify the gaps between the distributions of human-written text and those of modern neural language models by correlating with human judgments and identifying known properties of the generated texts. We conclude the paper by demonstrating its applications to other AI domains and discussing practical recommendations.
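A rough numpy/scikit-learn sketch of the vector-quantization route is given below: jointly cluster feature vectors from the two sources, compare the resulting cluster histograms, and sweep a mixture weight to trace a divergence frontier. The cluster count, the Gaussian stand-ins for text embeddings, and the frontier printout are arbitrary choices, not the released MAUVE implementation.

```python
# Rough sketch of a vector-quantized divergence frontier: cluster both samples
# jointly, compare the resulting histograms, and sweep a mixture weight.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
p_feats = rng.normal(0.0, 1.0, size=(1000, 16))   # stand-in for human-text embeddings
q_feats = rng.normal(0.3, 1.1, size=(1000, 16))   # stand-in for model-text embeddings

k = 50
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(np.vstack([p_feats, q_feats]))
p_hist = np.bincount(labels[:1000], minlength=k) / 1000 + 1e-8
q_hist = np.bincount(labels[1000:], minlength=k) / 1000 + 1e-8

def kl(a, b):
    return float(np.sum(a * np.log(a / b)))

# Divergence frontier: KL of each distribution against mixtures R = w*P + (1-w)*Q.
frontier = []
for w in np.linspace(0.01, 0.99, 25):
    r = w * p_hist + (1 - w) * q_hist
    frontier.append((kl(q_hist, r), kl(p_hist, r)))

print("first/last frontier points:", frontier[0], frontier[-1])
```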
Agile robotics presents a difficult challenge with robots moving at high speeds requiring precise and low-latency sensing and control. Creating agile motion that accomplishes the task at hand while being safe to execute is a key requirement for agile robots to gain human trust. This requires designing new approaches that are flexible and maintain knowledge over world constraints. In this paper, we consider the problem of building a flexible and adaptive controller for a challenging agile mobile manipulation task of hitting ground strokes on a wheelchair tennis robot. We propose and evaluate an extension to work done on learning striking behaviors using a probabilistic movement primitive (ProMP) framework by (1) demonstrating the safe execution of learned primitives on an agile mobile manipulator setup, and (2) proposing an online primitive refinement procedure that utilizes evaluative feedback from humans on the executed trajectories.
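For readers unfamiliar with ProMPs, here is a numpy sketch of the underlying machinery for one degree of freedom: radial-basis features, a Gaussian over weights fitted from demonstrations, and sampling of new trajectories. The basis count and the synthetic "demonstrations" are illustrative; the wheelchair tennis setup and the human-feedback refinement are not modeled.

```python
# Sketch of a 1-DoF probabilistic movement primitive (ProMP): fit a Gaussian
# over basis-function weights from demonstrations, then sample new strokes.
import numpy as np

rng = np.random.default_rng(0)
T, n_basis, n_demos = 100, 10, 20
t = np.linspace(0, 1, T)
centers = np.linspace(0, 1, n_basis)
phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * 0.05 ** 2))
phi = phi / phi.sum(axis=1, keepdims=True)           # (T, n_basis) normalized features

# Synthetic demonstrations: noisy swings toward a contact point.
demos = np.sin(np.pi * t)[None, :] + 0.05 * rng.normal(size=(n_demos, T))

# Fit per-demo weights by least squares, then a Gaussian over the weights.
w = np.linalg.lstsq(phi, demos.T, rcond=None)[0].T   # (n_demos, n_basis)
mu_w, sigma_w = w.mean(axis=0), np.cov(w.T) + 1e-6 * np.eye(n_basis)

# Sample a new primitive and roll it out as a trajectory.
w_new = rng.multivariate_normal(mu_w, sigma_w)
traj = phi @ w_new
print(traj.shape, traj.min(), traj.max())
```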
Inferring knowledge from clinical trials using knowledge graph embedding is an emerging area. However, customizing graph embeddings for different use cases remains a significant challenge. We propose custom2vec, an algorithmic framework to customize graph embeddings by incorporating user preferences in training the embeddings. It captures user preferences by adding custom nodes and links derived from manually vetted results of a separate information retrieval method. We propose a joint learning objective to preserve the original network structure while incorporating the user's custom annotations. We hypothesize that the custom training improves user-expected predictions, for example, in link prediction tasks. We demonstrate the effectiveness of custom2vec for clinical trials related to non-small cell lung cancer (NSCLC) with two customization scenarios: recommending immuno-oncology trials evaluating PD-1 inhibitors and exploring similar trials that compare new therapies with a standard of care. The results show that custom2vec training achieves better performance than the conventional training methods. Our approach is a novel way to customize knowledge graph embeddings and enable more accurate recommendations and predictions.
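The customization idea can be sketched with off-the-shelf tools: inject a user-preference node and its links into a trial graph, then learn embeddings from random walks with gensim's Word2Vec. The graph, the "user_pref" node, and the walk settings below are hypothetical, and plain walks stand in for custom2vec's joint learning objective.

```python
# Sketch of customizing graph embeddings by injecting user-preference nodes
# and links before training; random walks + Word2Vec stand in for custom2vec.
import random
import networkx as nx
from gensim.models import Word2Vec

g = nx.Graph()
g.add_edges_from([
    ("trial_A", "PD-1_inhibitor"), ("trial_B", "PD-1_inhibitor"),
    ("trial_C", "chemotherapy"),   ("trial_A", "NSCLC"),
    ("trial_B", "NSCLC"),          ("trial_C", "NSCLC"),
])

# Customization: a node representing manually vetted user preferences,
# linked to the trials the user flagged as relevant.
g.add_edges_from([("user_pref", "trial_A"), ("user_pref", "trial_B")])

def random_walks(graph, num_walks=50, walk_len=8, seed=0):
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for node in graph.nodes:
            walk = [node]
            for _ in range(walk_len - 1):
                walk.append(rng.choice(list(graph.neighbors(walk[-1]))))
            walks.append(walk)
    return walks

model = Word2Vec(sentences=random_walks(g), vector_size=32, window=3,
                 min_count=1, sg=1, seed=0, workers=1)
# Trials linked through the preference node are expected to embed closer together.
print(model.wv.similarity("trial_A", "trial_B"), model.wv.similarity("trial_A", "trial_C"))
```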
Code generation models have achieved impressive performance. However, they tend to be brittle as slight edits to a prompt could lead to very different generations; these robustness properties, critical for user experience when deployed in real-life applications, are not well understood. Most existing works on robustness in text or code tasks have focused on classification, while robustness in generation tasks is an uncharted area and to date there is no comprehensive benchmark for robustness in code generation. In this paper, we propose ReCode, a comprehensive robustness evaluation benchmark for code generation models. We customize over 30 transformations specifically for code on docstrings, function and variable names, code syntax, and code format. They are carefully designed to be natural in real-life coding practice, preserve the original semantic meaning, and thus provide multifaceted assessments of a model's robustness performance. With human annotators, we verified that over 90% of the perturbed prompts do not alter the semantic meaning of the original prompt. In addition, we define robustness metrics for code generation models considering the worst-case behavior under each type of perturbation, taking advantage of the fact that executing the generated code can serve as objective evaluation. We demonstrate ReCode on SOTA models using HumanEval, MBPP, as well as function completion tasks derived from them. Interesting observations include: better robustness for CodeGen over InCoder and GPT-J; models are most sensitive to syntax perturbations; more challenging robustness evaluation on MBPP over HumanEval.
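As a small taste of one perturbation family (identifier renaming), the sketch below rewrites variable names in a function prompt using Python's ast module; ReCode's transformation set is far broader, and this rename rule is a simplification for illustration rather than the benchmark's own code.

```python
# Sketch of one semantics-preserving prompt perturbation: renaming local
# variable and argument names via the ast module. Not ReCode's implementation.
import ast

class RenameVariables(ast.NodeTransformer):
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        if node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node

    def visit_arg(self, node):
        if node.arg in self.mapping:
            node.arg = self.mapping[node.arg]
        return node

prompt = '''
def running_sum(numbers):
    """Return cumulative sums of the input list."""
    total, out = 0, []
    for x in numbers:
        total += x
        out.append(total)
    return out
'''

tree = ast.parse(prompt)
perturbed = RenameVariables({"numbers": "values", "total": "acc", "x": "item"}).visit(tree)
print(ast.unparse(perturbed))   # same semantics, different surface form
```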